A Decentralized Mobile Computing Network for Multi-Robot Systems Operations
Collective animal behaviors are paradigmatic examples of fully decentralized
operations involving complex collective computations such as collective turns
in flocks of birds or collective harvesting by ants. These systems offer a
unique source of inspiration for the development of fault-tolerant and
self-healing multi-robot systems capable of operating in dynamic environments.
Specifically, swarm robotics emerged on these premises and continues to grow
significantly. However, to date, most swarm robotics systems reported in the
literature involve basic computational tasks---averages and other algebraic
operations. In this paper, we introduce a novel Collective computing framework
based on the swarming paradigm, which exhibits the key innate features of
swarms: robustness, scalability and flexibility. Unlike Edge computing, the
proposed Collective computing framework is truly decentralized and does not
require user intervention or additional servers to sustain its operations. This
Collective computing framework is applied to the complex task of collective
mapping, in which multiple robots aim to cooperatively map a large area. Our
results confirm the effectiveness of the cooperative strategy, its robustness
to the loss of multiple units, as well as its scalability. Furthermore, the
topology of the interconnecting network is found to greatly influence the
performance of the collective action.Comment: Accepted for Publication in Proc. 9th IEEE Annual Ubiquitous
Computing, Electronics & Mobile Communication Conferenc
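The basic swarm computations the abstract contrasts with (averages and other algebraic operations) can be illustrated with a minimal decentralized consensus sketch. This is a generic gossip-style averaging loop under an assumed ring topology and update weight, not the framework proposed in the paper:

```python
# Minimal sketch of decentralized average consensus over a robot network.
# The ring topology and the update weight eps are illustrative assumptions.

def consensus_average(values, neighbors, rounds=100, eps=0.3):
    """Each node repeatedly moves toward the mean of its neighbors' values."""
    x = list(values)
    for _ in range(rounds):
        nxt = []
        for i, xi in enumerate(x):
            nbr_mean = sum(x[j] for j in neighbors[i]) / len(neighbors[i])
            nxt.append(xi + eps * (nbr_mean - xi))
        x = nxt
    return x

# Four robots on a ring; every local estimate converges to the global
# average (2.5) using only neighbor-to-neighbor communication.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
result = consensus_average([1.0, 2.0, 3.0, 4.0], ring)
```

Because the update matrix is doubly stochastic, the sum of the estimates is preserved and all nodes converge to the same average, which is why such schemes tolerate the loss of a central coordinator.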
Improved construction of irregular progressive edge-growth Tanner graphs
The progressive edge-growth algorithm is a well-known procedure to construct
regular and irregular low-density parity-check codes. In this paper, we propose
a modification of the original algorithm that improves the performance of these
codes in the waterfall region when constructing codes complying with both
check and symbol node degree distributions. The proposed algorithm is thus
interesting if a family of irregular codes with a complex check node degree
distribution is used.
Comment: 3 pages, 3 figures.
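The core idea of progressive edge-growth can be sketched as follows: each new edge from a symbol node is attached to the check node that is farthest away in the current Tanner graph (maximizing local girth), preferring unreachable checks, with ties broken by lowest check degree. The graph sizes and degree sequence below are illustrative; this is the textbook PEG rule, not the modified algorithm the paper proposes:

```python
from collections import deque

# Sketch of the progressive edge-growth (PEG) construction of a Tanner graph.
def peg(n_sym, n_chk, sym_degrees):
    chk_nbrs = [set() for _ in range(n_chk)]   # check node -> symbol nodes
    sym_nbrs = [set() for _ in range(n_sym)]   # symbol node -> check nodes
    for s in range(n_sym):
        for _ in range(sym_degrees[s]):
            # BFS from symbol s: distance (in edges) to each reachable check.
            dist = {c: 1 for c in sym_nbrs[s]}
            queue = deque(dist.items())
            while queue:
                c, d = queue.popleft()
                for s2 in chk_nbrs[c]:
                    for c2 in sym_nbrs[s2]:
                        if c2 not in dist:
                            dist[c2] = d + 2
                            queue.append((c2, d + 2))
            # Candidate checks exclude those already connected to s; prefer
            # unreachable checks, then the farthest, then the lowest degree.
            candidates = [c for c in range(n_chk) if c not in sym_nbrs[s]]
            best = min(candidates,
                       key=lambda c: (c in dist, -dist.get(c, 0),
                                      len(chk_nbrs[c])))
            chk_nbrs[best].add(s)
            sym_nbrs[s].add(best)
    return sym_nbrs

# A tiny graph: 4 symbol nodes of degree 2 over 3 check nodes.
graph = peg(4, 3, [2, 2, 2, 2])
```

Placing each edge as far as possible from the symbol's existing neighborhood is what keeps short cycles, which harm iterative decoding, out of the graph.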
Blind Reconciliation
Information reconciliation is a crucial procedure in the classical
post-processing of quantum key distribution (QKD). Poor reconciliation
efficiency, revealing more information than strictly needed, may compromise the
maximum attainable distance, while poor performance of the algorithm limits the
practical throughput in a QKD device. Historically, reconciliation has been
mainly done either with procedures that disclose close to the minimal amount
of information but are heavily interactive, like Cascade, or with less
efficient but also less interactive procedures (just one message is
exchanged), like those based on
low-density parity-check (LDPC) codes. The price to pay in the LDPC case is
that good efficiency is only attained for very long codes and in a very narrow
range centered around the quantum bit error rate (QBER) that the code was
designed to reconcile, thus requiring several codes if a broad range of
QBER needs to be catered for. Real-world implementations of these methods are
thus very demanding on computational resources, communication resources, or
both, to the extent that the latest generation of GHz-clocked QKD systems is
finding a bottleneck in the classical part. In order to produce compact, high
performance and reliable QKD systems it would be highly desirable to remove
these problems. Here we analyse the use of short-length LDPC codes in the
information reconciliation context using a low interactivity, blind, protocol
that avoids an a priori error rate estimation. We demonstrate that LDPC codes
of length 2x10^3 bits are suitable for blind reconciliation. Such codes are of high
interest in practice, since they can be used for hardware implementations with
very high throughput.
Comment: 22 pages, 8 figures.
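The reconciliation efficiency discussed above is conventionally measured against the Shannon limit: disclosing m bits to correct an n-bit string at a given QBER gives f = m / (n * h(QBER)), where h is the binary entropy and f = 1 is ideal. The code length and syndrome size below are illustrative numbers, not results from the paper:

```python
from math import log2

def h2(p):
    """Binary entropy in bits of a Bernoulli(p) source."""
    return -p * log2(p) - (1 - p) * log2(1 - p)

def efficiency(n, m, qber):
    """Disclosed information m relative to the Shannon minimum n*h(qber)."""
    return m / (n * h2(qber))

# Illustrative: a length-2000 code disclosing a 400-bit syndrome at QBER 3%.
f = efficiency(2000, 400, 0.03)   # slightly above the ideal value of 1
```

Values of f much above 1 reveal more than necessary and eat directly into the secret key, which is why short codes (whose f degrades) are normally considered impractical without tricks such as the blind protocol.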
Untainted Puncturing for Irregular Low-Density Parity-Check Codes
Puncturing is a well-known coding technique widely used for constructing
rate-compatible codes. In this paper, we consider the problem of puncturing
low-density parity-check codes and propose a new algorithm for intentional
puncturing. The algorithm is based on the puncturing of untainted symbols, i.e.
nodes with no punctured symbols within their neighboring set. It is shown that
the algorithm proposed here performs better than previous proposals for a range
of coding rates and small proportions of punctured symbols.
Comment: 4 pages, 3 figures.
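The untainted selection rule described in the abstract can be sketched directly: a symbol may be punctured only if no already-punctured symbol appears among the symbols sharing a check node with it. The greedy pass and the tiny Tanner graph below are illustrative assumptions, not the paper's exact algorithm:

```python
# Sketch of puncturing restricted to "untainted" symbols.
def untainted_puncture(sym_to_chks, chk_to_syms, budget):
    punctured = set()
    for s in sym_to_chks:                    # greedy pass in fixed order
        if len(punctured) == budget:
            break
        # Symbols that share at least one check node with s.
        neighbors = {s2 for c in sym_to_chks[s]
                        for s2 in chk_to_syms[c]} - {s}
        if punctured.isdisjoint(neighbors):  # s is untainted
            punctured.add(s)
    return punctured

# 4 symbols, 2 checks: check 0 covers {0,1,2}, check 1 covers {2,3}.
sym_to_chks = {0: [0], 1: [0], 2: [0, 1], 3: [1]}
chk_to_syms = {0: [0, 1, 2], 1: [2, 3]}
p = untainted_puncture(sym_to_chks, chk_to_syms, budget=2)
```

Keeping punctured symbols apart in the graph ensures every check node retains reliable symbols, which helps the iterative decoder recover the punctured positions.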
Rate Compatible Protocol for Information Reconciliation: An application to QKD
Information Reconciliation is a mechanism for weeding out the
discrepancies between two correlated variables. It is an essential component in
every key agreement protocol where the key has to be transmitted through a
noisy channel. The typical case is the satellite scenario described by
Maurer in the early 1990s. Recently the need has arisen in connection with Quantum
Key Distribution (QKD) protocols, where it is very important not to reveal
unnecessary information in order to maximize the shared key length. In this
paper we present an information reconciliation protocol based on a rate
compatible construction of Low Density Parity Check codes. Our protocol
improves the efficiency of the reconciliation for the whole range of error
rates in the discrete variable QKD context. Its adaptability, together with its
low interactivity, makes it especially well suited for QKD reconciliation.
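Rate-compatible constructions of the kind described above are commonly obtained by puncturing (raising the rate) and shortening (lowering it) a single mother code, so one code covers a range of error rates. The formula and numbers below sketch that general technique under assumed parameters; they are not taken from the paper:

```python
# Sketch of rate adaptation of an (n, k) mother LDPC code: puncturing p
# symbols raises the effective rate, shortening s symbols lowers it.
def adapted_rate(n, k, punctured, shortened):
    """Effective rate after puncturing/shortening an (n, k) mother code."""
    return (k - shortened) / (n - punctured - shortened)

base = adapted_rate(2000, 1000, 0, 0)      # mother code rate 0.5
high = adapted_rate(2000, 1000, 200, 0)    # puncturing raises the rate
low  = adapted_rate(2000, 1000, 0, 200)    # shortening lowers it
```

Because only the pattern of punctured/shortened positions changes, both parties can sweep a continuum of rates with a single stored code, which is what keeps the protocol's interactivity low.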
Fundamental Finite Key Limits for One-Way Information Reconciliation in Quantum Key Distribution
The security of quantum key distribution protocols is guaranteed by the laws
of quantum mechanics. However, a precise analysis of the security properties
requires tools from both classical cryptography and information theory. Here,
we employ recent results in non-asymptotic classical information theory to show
that one-way information reconciliation imposes fundamental limitations on the
amount of secret key that can be extracted in the finite key regime. In
particular, we find that an often used approximation for the information
leakage during information reconciliation is not generally valid. We propose an
improved approximation that takes into account finite key effects and
numerically test it against codes for two probability distributions, which we
call binary-binary and binary-Gaussian, that typically appear in quantum key
distribution protocols.
A Discussion of Thin Client Technology for Computer Labs
Computer literacy is non-negotiable for any professional in an increasingly
computerised environment. Educational institutions should be equipped to
provide this new basic training for modern life. Accordingly, computer labs are
an essential medium for education in almost any field. Computer labs are one of
the most popular IT infrastructures for technical training in primary and
secondary schools, universities and other educational institutions all over the
world. Unfortunately, a computer lab is expensive, in terms of both initial
purchase and annual maintenance costs, especially when we want to run the
latest software. Hence, research efforts addressing computer lab efficiency,
performance or cost reduction would have worldwide repercussions. In response
to this concern, this paper presents a survey on thin client technology for
computer labs in educational environments. Besides setting out the advantages
and drawbacks of this technology, we aim to refute false prejudices against
thin clients, identifying a set of educational scenarios where thin clients are
the better choice and others where traditional solutions remain preferable.